Scalable importance tempering and Bayesian variable selection
Authors
Abstract
Related articles
Scalable Bayesian Kernel Models with Variable Selection
Nonlinear kernels are used extensively in regression models in statistics and machine learning since they often improve predictive accuracy. Variable selection is a challenge in the context of kernel-based regression models. In linear regression, the concept of an effect size for the regression coefficients is very useful for variable selection. In this paper we provide an analog for the effect ...
Importance tempering
Simulated tempering (ST) is an established Markov chain Monte Carlo (MCMC) method for sampling from a multimodal density π(θ). Typically, ST involves introducing an auxiliary variable k taking values in a finite subset of [0, 1] and indexing a set of tempered distributions, say π_k(θ) ∝ π(θ)^k. In this case, small values of k encourage better mixing, but samples from π are only obtained when the...
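As a point of reference for the tempered family π_k(θ) ∝ π(θ)^k described above, here is a minimal, illustrative sketch of simulated tempering in Python. It is not code from any of the listed papers: the bimodal toy target, the temperature ladder, and the uniform pseudo-prior over k are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only) of the tempered family pi_k(theta) ∝ pi(theta)^k.
# Assumptions: a toy bimodal target, a small temperature ladder, and a uniform
# pseudo-prior over the ladder index. The chain alternates a random-walk
# Metropolis move in theta with a Metropolis move on the ladder index k.
import numpy as np

rng = np.random.default_rng(0)

def log_pi(theta):
    # Toy bimodal target (assumed): mixture of N(-3, 1) and N(3, 1), up to a constant.
    return np.logaddexp(-0.5 * (theta + 3) ** 2, -0.5 * (theta - 3) ** 2)

ks = np.array([0.1, 0.3, 0.6, 1.0])   # temperature ladder (assumed grid in (0, 1])
theta, j = 0.0, len(ks) - 1           # current state and ladder index
samples = []

for _ in range(20_000):
    # Metropolis update of theta under the tempered density pi_k(theta) ∝ pi(theta)^k.
    prop = theta + rng.normal(scale=1.5)
    if np.log(rng.uniform()) < ks[j] * (log_pi(prop) - log_pi(theta)):
        theta = prop
    # Metropolis update of the ladder index (uniform neighbour proposal, clamped at the ends).
    j_prop = min(max(j + rng.choice([-1, 1]), 0), len(ks) - 1)
    if np.log(rng.uniform()) < (ks[j_prop] - ks[j]) * log_pi(theta):
        j = j_prop
    # Only draws made at k = 1 are kept as (approximate) samples from pi itself.
    if ks[j] == 1.0:
        samples.append(theta)
```

In this sketch only the k = 1 draws are retained, which is the limitation the abstract alludes to; reweighting the draws made at the other temperatures, rather than discarding them, is broadly the motivation behind importance tempering.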
Bayesian recursive variable selection
In this work we introduce a new model space prior for Bayesian variable selection in linear regression. This prior is based on a recursive constructive procedure that randomly generates models by including variables in a stagewise fashion. We provide a recipe for carrying out Bayesian variable selection and model averaging using this prior, and show that it possesses several desirable ...
Bayesian Shrinkage Variable Selection
We introduce a new Bayesian approach to the variable selection problem which we term Bayesian Shrinkage Variable Selection (BSVS). This approach is inspired by the Relevance Vector Machine (RVM), which uses a Bayesian hierarchical linear setup to do variable selection and model estimation. RVM is typically applied in the context of kernel regression, although it is also suitable in the standard ...
Bayesian Grouped Variable Selection
Traditionally, variable selection in the context of linear regression has been approached with optimization-based methods such as the classical Lasso. Such methods provide a sparse point estimate of the regression coefficients but are unable to provide further information about the distribution of those coefficients, such as expectation or variance estimates. In recent years, ...
Journal
Journal title: Journal of the Royal Statistical Society: Series B (Statistical Methodology)
Year: 2019
ISSN: 1369-7412
DOI: 10.1111/rssb.12316